% perm filename SCIENT.4[LET,JMC] blob
% sn#880972 filedate 1990-01-03 generic text, type C, neo UTF8
\input jmclet
\jmclet
\address
Scientific American
415 Madison Avenue
New York, NY 10017
\body
Sirs:

When John Searle writes that the man in his Chinese room
could interpret his Chinese text not only as a conversation but
also as a report of a chess game or a stock-market prediction, he
is making an empirical claim that is false for texts of
significant length. Languages that are used for reasoning or for
communication, whether by humans or by computer programs, require
redundancy that makes it almost impossible for alternative
interpretations to exist for texts of sufficient length.
Here are some requirements for useful languages, whether
natural or artificial.

1. Fragments of text must usually have meanings. All
natural languages have words and sentences, and initial segments
of sentences have meanings. Long sentences with the verb at the
end are difficult even for German philosophers.

2. When a concept is expressed more than once in a discourse,
similar words and phrases are used. However, the repeated item
will usually have different relations to the other words and phrases
when it recurs. These changing relationships prevent arbitrary
interpretations of long texts.

3. The theory of algorithmic complexity
[Scientific American article by Gregory Chaitin] tells us that
you can give a text an arbitrary interpretation, but you
are likely to need a computer program about as long as the
text itself, and it will also have a long running time.

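The point can be sketched in the standard notation of
algorithmic complexity; the symbols $K$, $x$, $y$, and $p$ below
are illustrative, not part of the published argument. If a
program $p$ interprets a text $x$ as an unrelated text $y$, then
$$K(y)\le K(x)+\ell(p)+c,$$
so $\ell(p)\ge K(y)-K(x)-c$, where $\ell(p)$ is the length of
$p$ and $c$ is a constant. For a $y$ that is incompressible and
essentially unrelated to $x$, $K(y)$ is close to the length of
$y$ itself, so the interpreting program must be about as long as
the alternative text.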
I challenge anyone to take a page of conversation from
a Chinese novel and turn it into a stock market prediction by
changing the interpretations of Chinese characters.
If I'm right that this can't be done, we seem to have a
counterexample to Searle's otherwise vague axiom 3,
{\it ``Syntax by itself is neither constitutive of
nor sufficient for semantics.''}

When artificial intelligence has advanced to make possible
a Chinese room program capable of an intelligent conversation,
its database will contain much explicit data that its designers
will intend as expressing linguistic and world knowledge. At that
time, it will be appropriate for philosophers to argue about
whether this database admits any other interpretation than that
knowledge.

This common-sense conclusion is supported by the fact
that cryptograms have unique solutions, by statistical
communication theory, and by algorithmic complexity theory.

When children learn their native languages, the utterances
of their parents are accompanied by gestures in particular
situations. Nevertheless, when the parent points at the dog
and says ``doggie'', the child has to infer that ``doggie''
means dog and not finger. This is only possible because the
whole complex of repeated speech, gesture and situation has
an essentially unique interpretation.

Finally, the man in the Chinese room is doing what computers
do all the time, interpreting a program. When computers do this,
it is necessary not to confuse the capabilities of a program
with the capabilities of the machine or with the capabilities of other programs.
\closing
Sincerely,
John McCarthy
\endletter
\end